spatial ability
11Plus-Bench: Demystifying Multimodal LLM Spatial Reasoning with Cognitive-Inspired Analysis

Li, Chengzu, Wu, Wenshan, Zhang, Huanyu, Li, Qingtao, Gao, Zeyu, Xia, Yan, Hernández-Orallo, José, Vulić, Ivan, Wei, Furu

arXiv.org Artificial Intelligence

In human cognition, spatial reasoning and perception are closely entangled, yet the nature of this interplay remains underexplored in the evaluation of multimodal large language models (MLLMs). While recent MLLM advancements show impressive performance on reasoning, their capacity for human-like spatial cognition remains an open question. In this work, we introduce a systematic evaluation framework to assess the spatial reasoning abilities of state-of-the-art MLLMs relative to human performance. Central to our work is 11Plus-Bench, a high-quality benchmark derived from realistic standardized spatial aptitude tests. 11Plus-Bench also features fine-grained expert annotations of both perceptual complexity and reasoning process, enabling detailed instance-level analysis of model behavior. Through extensive experiments across 14 MLLMs and human evaluation, we find that current MLLMs exhibit early signs of spatial cognition. Despite a large performance gap compared to humans, MLLMs' cognitive profiles resemble those of humans in that cognitive effort correlates strongly with reasoning-related complexity. However, instance-level performance in MLLMs remains largely random, whereas human correctness is highly predictable and shaped by abstract pattern complexity. These findings highlight both emerging capabilities and limitations in current MLLMs' spatial reasoning and provide actionable insights for advancing model design.


Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models

Stogiannidis, Ilias, McDonagh, Steven, Tsaftaris, Sotirios A.

arXiv.org Artificial Intelligence

Vision-Language Models (VLMs) have recently emerged as powerful tools, excelling in tasks that integrate visual and textual comprehension, such as image captioning, visual question answering, and image-text retrieval. However, the spatial components of existing VLM benchmarks often fail to isolate spatial reasoning from related tasks such as object detection or semantic comprehension. In this paper, we address these deficiencies with a multi-faceted approach towards understanding spatial reasoning. Informed by the diverse and multi-dimensional nature of human spatial reasoning abilities, we present a detailed analysis that first delineates the core elements of spatial reasoning: spatial relations, orientation and navigation, mental rotation, and spatial visualization, and then assesses the performance of these models in both synthetic and real-world images, bridging controlled and naturalistic contexts. We analyze 13 state-of-the-art Vision-Language Models, uncovering pivotal insights into their spatial reasoning performance. Our results reveal profound shortcomings in current VLMs, with average accuracy across the 13 models approximating random chance, highlighting spatial reasoning as a persistent obstacle. This work not only exposes the pressing need to advance spatial reasoning within VLMs but also establishes a solid platform for future exploration.


Defining and Evaluating Visual Language Models' Basic Spatial Abilities: A Perspective from Psychometrics

Xu, Wenrui, Lyu, Dalin, Wang, Weihang, Feng, Jie, Gao, Chen, Li, Yong

arXiv.org Artificial Intelligence

The Theory of Multiple Intelligences underscores the hierarchical nature of cognitive capabilities. To advance Spatial Artificial Intelligence, we pioneer a psychometric framework defining five Basic Spatial Abilities (BSAs) in Visual Language Models (VLMs): Spatial Perception, Spatial Relation, Spatial Orientation, Mental Rotation, and Spatial Visualization. Benchmarking 13 mainstream VLMs through nine validated psychometric experiments reveals significant gaps versus humans (average score 24.95 vs. 68.38), with three key findings: 1) VLMs mirror human hierarchies (strongest in 2D orientation, weakest in 3D rotation) with independent BSAs (Pearson's r<0.4); 2) Smaller models such as Qwen2-VL-7B surpass larger counterparts, with Qwen leading (30.82) and InternVL2 lagging (19.6); 3) Interventions like chain-of-thought (0.100 accuracy gain) and 5-shot training (0.259 improvement) show limits from architectural constraints. Identified barriers include weak geometry encoding and missing dynamic simulation. By linking psychometric BSAs to VLM capabilities, we provide a diagnostic toolkit for spatial intelligence evaluation, methodological foundations for embodied AI development, and a cognitive science-informed roadmap for achieving human-like spatial intelligence.


Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks

Dubois, Hippolyte, Callet, Patrick Le, Coutrot, Antoine

arXiv.org Machine Learning

Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs) which can be more efficient but less interpretable. In this paper we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories, while taking into account the demographics of the navigators. Hence, our model extracts markers linked to both behaviour and demographics. We propose a composite signal analyser (CompSNN) combining three simple ANN modules. Each of these modules uses different signal representations of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and allows to visualise which parts of the signal were most useful to discriminate the trajectories.


Men Are Better At Maps Until Women Take This Course - Issue 54: The Unspoken

Nautilus

Puts, D.A., McDaniel, M.A., Jordan, C.L., & Breedlove, S.M. Spatial ability and prenatal androgens: Meta-analysis of CAH and digit ratio studies.


How to raise a genius: lessons from a 45-year study of super-smart children

#artificialintelligence

On a summer day in 1968, professor Julian Stanley met a brilliant but bored 12-year-old named Joseph Bates. The Baltimore student was so far ahead of his classmates in mathematics that his parents had arranged for him to take a computer-science course at Johns Hopkins University, where Stanley taught. Having leapfrogged ahead of the adults in the class, the child kept himself busy by teaching the FORTRAN programming language to graduate students. Unsure of what to do with Bates, his computer instructor introduced him to Stanley, a researcher well known for his work in psychometrics -- the study of cognitive performance. To discover more about the young prodigy's talent, Stanley gave Bates a battery of tests that included the SAT college-admissions exam, normally taken by university-bound 16- to 18-year-olds in the United States. Bates's score was well above the threshold for admission to Johns Hopkins, and prompted Stanley to search for a local high school that would let the child take advanced mathematics and science classes.



Anatomy Learning with Virtual Objects

Stull, Andrew T. (University of California, Santa Barbara) | Hegarty, Mary (University of California, Santa Barbara) | Mayer, Richard E. (University of California, Santa Barbara)

AAAI Conferences

In 3 experiments, participants learned bone anatomy by using a hand-held controller to rotate an on-screen 3D bone model. The on-screen bone included (OR condition) or did not include (no-OR condition) orientation references—visible lines marking its axes. The learning task involved rotating the on-screen bone to match target orientations. Learning outcomes were assessed by having participants identify anatomical features from different orientations. On the learning task, the OR group performed more accurately, directly, and quickly than the control group and high-spatial individuals outperformed low-spatial individuals. Assessments of anatomy learning indicated that under more challenging conditions, ORs elevated learning by low-spatial individuals to near that of high-spatial individuals. In Experiment 3, orientation references were shown to help learners avoid disorientation due to the symmetrical shape of the object.


Representations of Shape during Mental Rotation

Khooshabeh, Peter (University of California, Santa Barbara) | Hegarty, Mary (University of California, Santa Barbara)

AAAI Conferences

How is shape represented during spatial tasks such as mental rotation? This research investigated the format of mental representations of 3-D shapes during mental rotation. Specifically, we tested the extent to which visual information, such as color, is represented during mental rotation using methods including reaction-time studies, verbal protocol analysis, and eye tracking. Another set of studies examined whether people use piecemeal or holistic strategies to rotate complex objects. Results show that individuals with good rotation ability do not represent color during mental rotation and rotate whole shapes, whereas poor rotators do represent color and rotate individual pieces of the shape using piecemeal strategies. This work contributes to theories about cognitive shape processing by showing that different information processing strategies may be one cause of individual differences in mental rotation performance.


Hippocampal Model of Rat Spatial Abilities Using Temporal Difference Learning

Foster, David J., Morris, Richard G. M., Dayan, Peter

Neural Information Processing Systems

We provide a model of the standard watermaze task, and of a more challenging task involving novel platform locations, in which rats exhibit one-trial learning after a few days of training. The model uses hippocampal place cells to support reinforcement learning and, in an integrated manner, to build and use allocentric coordinates.
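The temporal-difference mechanism at the core of this model can be sketched minimally: a value function expressed over Gaussian place-cell activations and updated by the TD error as the agent moves toward the platform. This is a simplified illustration, not the paper's full actor-critic architecture; the one-dimensional corridor, place-field width, learning rate, and discount factor are all illustrative assumptions.

```python
import numpy as np

# Minimal TD(0) sketch: value learning over Gaussian place-cell features
# in a 1-D "watermaze" corridor. All hyperparameters are illustrative,
# not taken from Foster, Morris & Dayan.

rng = np.random.default_rng(0)

n_cells = 20                      # place cells tiling positions 0..1
centers = np.linspace(0.0, 1.0, n_cells)
sigma = 0.08                      # place-field width (assumed)

def place_activity(x):
    """Gaussian place-cell population response at position x."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

w = np.zeros(n_cells)             # value weights: V(x) = w . f(x)
alpha, gamma = 0.1, 0.95          # learning rate, discount
goal = 1.0                        # "platform" at the right end

for episode in range(200):
    x = rng.uniform(0.0, 0.9)     # random start position
    while x < goal:
        x_next = min(x + 0.05, goal)        # step toward the platform
        r = 1.0 if x_next >= goal else 0.0  # reward only at the platform
        f = place_activity(x)
        v = w @ f
        v_next = 0.0 if x_next >= goal else w @ place_activity(x_next)
        delta = r + gamma * v_next - v      # TD prediction error
        w += alpha * delta * f              # credit the active place cells
        x = x_next

# After learning, estimated value should rise toward the platform.
v_near = w @ place_activity(0.9)
v_far = w @ place_activity(0.1)
```

The value gradient that emerges over the place-cell basis (`v_near > v_far`) is what a policy can then climb; the full model couples such a critic with an actor and an allocentric coordinate system, which this sketch omits.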